public trust
- Europe > United Kingdom (1.00)
- North America > United States (0.15)
- Oceania > Australia (0.08)
- Europe > Ukraine (0.05)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.35)
More Britons view AI as economic risk than opportunity, Tony Blair thinktank finds
Britons are concerned about AI's impact on the economy, and on jobs in particular. The Tony Blair Institute (TBI) warned that the poll findings threatened Keir Starmer's ambition for the UK to become an artificial intelligence "superpower" and urged the government to convince the public of the technology's benefits. TBI commissioned a survey that found 38% of Britons see AI as an economic risk while 20% see it as an opportunity. The poll of more than 3,700 adults also showed that lack of trust was the biggest barrier to adoption.
- Europe > United Kingdom (1.00)
- North America > United States (0.15)
- Oceania > Australia (0.05)
- (3 more...)
- Information Technology > Communications > Social Media (0.51)
- Information Technology > Artificial Intelligence > Applied AI (0.31)
Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task
Moore, Jared, Cooper, Ned, Overmark, Rasmus, Cibralic, Beba, Haber, Nick, Jones, Cameron R.
Recent evidence suggests Large Language Models (LLMs) display Theory of Mind (ToM) abilities. Most ToM experiments place participants in a spectatorial role, wherein they predict and interpret other agents' behavior. However, human ToM also contributes to dynamically planning action and strategically intervening on others' mental states. We present MindGames: a novel 'planning theory of mind' (PToM) task which requires agents to infer an interlocutor's beliefs and desires to persuade them to alter their behavior. Unlike previous evaluations, we explicitly evaluate use cases of ToM. We find that humans significantly outperform o1-preview (an LLM) at our PToM task (11% higher; p = 0.006). We hypothesize this is because humans have an implicit causal model of other agents (e.g., they know, as our task requires, to ask about people's preferences). In contrast, o1-preview outperforms humans in a baseline condition which requires a similar amount of planning but minimal mental state inferences (e.g., o1-preview is better than humans at planning when already given someone's preferences). These results suggest a significant gap between human-like social reasoning and LLM abilities.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Virginia (0.04)
- North America > United States > Indiana (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.66)
- Education (1.00)
- Leisure & Entertainment (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
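On the statistic reported in the MindGames abstract above (humans 11% higher than o1-preview; p = 0.006): the excerpt gives no sample sizes, but a gap of that size reaches a p-value in that range under a standard pooled two-proportion z-test once each group contributes a few hundred trials. The sketch below is purely illustrative; the counts (300 trials per group, 70% vs. 59% success) are hypothetical and are not the paper's numbers.

```python
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled success rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p1 - p2) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))               # standard normal CDF at |z|
    return z, 2 * (1 - phi)                               # two-sided p-value

# Hypothetical counts (NOT from the paper): 300 trials per group,
# humans succeed 70% of the time, o1-preview 59% -- an 11-point gap.
z, p = two_proportion_ztest(210, 300, 177, 300)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.82, p = 0.0049
```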
Cruise sidelines entire U.S. robotaxi fleet to focus on rebuilding 'public trust'
In the wake of California withdrawing Cruise's permit to operate self-driving cars in the state, the company said on Friday it's suspending all U.S. robotaxi operations. The move comes after the California Department of Motor Vehicles alleged that Cruise withheld from regulators video footage of a Cruise robotaxi dragging a person down a city street. The future for the company is anybody's guess. Its parent company, General Motors, has lost $1.9 billion on Cruise thus far this year, including a $732-million loss in the third quarter, according to its latest earnings report. Competitor Ford shut down its Argo AI robotaxi unit in 2022, concluding that the possibility of far-off profits wasn't worth the enormous cash drain.
- North America > United States > California > San Francisco County > San Francisco (0.08)
- North America > United States > California > Los Angeles County > Los Angeles (0.07)
- North America > United States > California > Los Angeles County > Santa Monica (0.05)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
Cruise puts robotaxi operations on pause following California license suspension
Cruise has paused all its driverless operations, the company has announced on LinkedIn and X. The GM-backed self-driving firm explained that it's taking time to examine its "processes, systems and tools" and that it will "reflect on how [it] can better operate in a way that will earn public trust." Cruise has been thrust under the spotlight recently after the California Department of Motor Vehicles (DMV) suspended its permits to operate driverless vehicles in the state over several safety-related issues. The California Public Utilities Commission also suspended the license giving Cruise the right to charge passengers for robotaxi rides. One of the latest incidents involving a Cruise vehicle happened in early October, when a woman was hit by another car and thrown into the path of one of the company's driverless vehicles.
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.99)
- Information Technology > Robotics & Automation (0.66)
- Energy > Power Industry > Utilities (0.60)
Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making
Harper, Shannon B., Weber, Eric S.
Automated decision-making systems are being increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to process information according to certain human-devised rules in order to address social problems or organizational challenges. Both research and real-world experience indicate that the public lacks trust in automated decision-making systems and the institutions that deploy them. The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility. However, often the public is never informed of how these systems operate and resultant institutional decisions are made. A "black box" effect of automated decision-making systems reduces the public's perceptions of integrity and trustworthiness. The result is that the public loses the capacity to identify, challenge, and rectify unfairness or the costs associated with the loss of public goods or benefits. The current position paper defines and explains the role of fiduciary responsibility within an automated decision-making system. We formulate an automated decision-making system as a data science lifecycle (DSL) and examine the implications of fiduciary responsibility within the context of the DSL. Fiduciary responsibility within DSLs provides a methodology for addressing the public's lack of trust in automated decision-making systems and the institutions that employ them to make decisions affecting the public. We posit that fiduciary responsibility manifests in several contexts of a DSL, each of which requires its own mitigation of sources of mistrust. To instantiate fiduciary responsibility, a Los Angeles Police Department (LAPD) predictive policing case study is examined.
- North America > United States > California > Los Angeles County > Los Angeles (0.55)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- (9 more...)
- Law > Criminal Law (1.00)
- Banking & Finance (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
Artificial intelligence needs regulations that build public trust in it
To build trust and confidence in the technology, laws should require organisations and governments to use AI in an ethical, safe and responsible manner that protects people's privacy. This means companies and the government must be accountable for the decisions their AI systems make. It means AI systems must be transparent, and that an organisation can explain how a person's data is being used by an AI system. It means protections must be put in place to reduce the risk that AI outputs are biased or discriminatory. It means individuals are notified when AI is used to make a decision that affects their rights. It means there are boundaries on how high-risk AI systems can be used, and it means individuals have appropriate legal recourse when those boundaries are broken.
- Government (1.00)
- Law > Civil Rights & Constitutional Law (0.57)
- Information Technology > Security & Privacy (0.55)
Artificial intelligence can enhance banking compliance
Technology has changed our society, and banks and other financial institutions have digitalized their operations at a rapid pace as well. However, the financial crime compliance units of these institutions still rely mainly on heavy manual processes. The key reason for compliance units' cautious approach to AI and automation has been uncertainty about the technology: do regulators approve machine-based decision-making, and is machine-learning logic fair in identifying suspicious activities? Nevertheless, there is a clear need to utilise technology in financial crime compliance.
- Banking & Finance (1.00)
- Law Enforcement & Public Safety > Fraud (0.80)
UK sets out proposals for new AI rulebook to unleash innovation and boost public trust in the technology
New plans for regulating the use of artificial intelligence (AI) will be published today to help develop consistent rules that promote innovation in this groundbreaking technology and protect the public. It comes as the Data Protection and Digital Information Bill is introduced to Parliament, which will transform the UK's data laws to boost innovation in technologies such as AI. The Bill will seize the benefits of Brexit, keeping a high standard of protection for people's privacy and personal data while delivering around £1 billion in savings for businesses. Artificial intelligence refers to machines which learn from data how to perform tasks normally performed by humans. For example, AI helps identify patterns in financial transactions that could indicate fraud, and helps clinicians diagnose illnesses from chest images.
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
AI ethics - how do we put theory into practice when international approaches vary?
Many governments around the world have rightly put ethical development and deployment at the heart of their AI thinking. Core to this complex issue is a set of interconnected problems: AI systems may automate societal problems, whether due to a systemic lack of diversity in development teams or the use of training data that contains historic or structural biases. The design of systems may also be a factor. The result may be the algorithmic exclusion of individuals or groups because of their ethnicity, gender, sexuality, religion, or socioeconomic background. For example, facial recognition systems that misidentify black or Asian people because of a lack of relevant data; or CV-scanning applications that reject applicants from some postcodes/zip codes because, historically, human employers have actively excluded those jobseekers.
- Europe > United Kingdom (0.29)
- North America > United States (0.15)
- Oceania > New Zealand (0.05)
- Law (1.00)
- Government (1.00)